Updating Probabilities
Abstract
The Method of Maximum (relative) Entropy (ME) has been designed for updating from a prior distribution to a posterior distribution when the information being processed is in the form of a constraint on the family of allowed posteriors. This is in contrast with the usual MaxEnt, which was designed as a method to assign, and not to update, probabilities. The objective of this paper is to strengthen the ME method in two ways. In [1] the axioms that define ME were distilled down to three; here the design is improved by considerably weakening the axiom that refers to independent subsystems. Instead of the old axiom, which read "When a system is composed of subsystems that are believed to be independent it should not matter whether the inference procedure treats them separately or jointly," we modify it by replacing the word 'believed' by the word 'known'. As pointed out by Karbelkar and by Uffink, the modified axiom is a much weaker consistency requirement, which, in their view, fails to single out the usual (logarithmic) relative entropy as the unique tool for updating. It merely restricts the form of the entropy to a one-dimensional continuum labeled by a parameter η; the resulting η-entropies are equivalent to the Renyi or the Tsallis entropies. We show that further applications of the same modified axiom select a unique, universal value for the parameter η, and this value corresponds to the usual (logarithmic) relative entropy. The advantage of our new approach is that it shows precisely how the other η-entropies are ruled out as tools for updating. Our second concern is mostly pedagogical. It concerns the relation between the ME method and Bayes' rule. We start by drawing the distinction between Bayes' theorem, which is a straightforward consequence of the product rule for probabilities, and Bayes' rule, which is the actual updating rule. We show that Bayes' rule can be derived as a special case of the ME method.
The virtue of our derivation, which hinges on translating the information in data into constraints that can be processed by ME, is that it is particularly clear. It throws light on Bayes’ rule and it shows the complete compatibility of Bayes’ updating with ME updating. References: [1] A. Caticha, “Relative Entropy and Inductive Inference,” in Bayesian Inference and Maximum Entropy Methods in Science and Engineering, ed. by G. Erickson and Y. Zhai, AIP Conf. Proc. 707, 75 (2004) (arXiv.org/abs/physics/0311093).
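The Bayes-as-ME claim in the abstract can be illustrated with a small numerical sketch (the toy numbers below are our own, not from the paper): among all joint distributions whose data marginal is concentrated on the observed outcome, the one minimizing the relative entropy to the joint prior is exactly the Bayes posterior.

```python
import math
import random

# Hypothetical toy numbers: a binary parameter theta and a binary datum x.
prior = [0.3, 0.7]                        # q(theta)
lik = [[0.8, 0.2], [0.4, 0.6]]            # q(x | theta), rows indexed by theta

x_obs = 1

# Bayes' rule: posterior(theta) proportional to q(theta) * q(x_obs | theta)
joint_at_obs = [prior[t] * lik[t][x_obs] for t in range(2)]
z = sum(joint_at_obs)
bayes_post = [w / z for w in joint_at_obs]

def rel_entropy(p, q):
    """S[p|q] = sum_i p_i log(p_i / q_i), with the convention 0 log 0 = 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# ME view: the data constraint forces p(theta, x) = w(theta) at x = x_obs
# and zero elsewhere, with sum w = 1.  The relative entropy to the joint
# prior then reduces to rel_entropy(w, joint_at_obs), so minimizing it over
# candidate posteriors w should select w = bayes_post.
s_bayes = rel_entropy(bayes_post, joint_at_obs)

random.seed(0)
for _ in range(1000):
    a = random.random()
    w = [a, 1 - a]                        # an arbitrary alternative posterior
    assert rel_entropy(w, joint_at_obs) >= s_bayes - 1e-12

print("Bayes posterior:", bayes_post)
print("minimal relative entropy:", s_bayes)
```

The check above only probes random alternatives, but the minimum is exact: for the constrained family, S[w|q] = KL(w || w_Bayes) - log z, which is minimized precisely at the Bayes posterior.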
Similar Resources
Minimum Information Updating with Specified Marginals in Probabilistic Expert Systems
A probability-updating method in probabilistic expert systems is considered in this paper based on the minimum discrimination information. Here, newly acquired information is taken as the latest true marginal probabilities, not as newly observed data with the same weight as previous data. Posterior probabilities are obtained by updating prior probabilities subject to the latest true marginals. ...
A Continuous Updating Rule for Imprecise Probabilities
The paper studies the continuity of rules for updating imprecise probability models when new data are observed. Discontinuities can lead to robustness issues: this is the case for the usual updating rules of the theory of imprecise probabilities. An alternative, continuous updating rule is introduced.
The Ex-Ante Non-Optimality of the Dempster-Schafer Updating Rule for Ambiguous Beliefs (A Commentary on "Updating Ambiguous Beliefs" by Gilboa and Schmeidler)
The most widely used updating rule for non-additive probabilities is the Dempster-Schafer rule. Schmeidler and Gilboa have developed a model of decision making under uncertainty based on non-additive probabilities, and in their paper "Updating Ambiguous Beliefs" they justify the Dempster-Schafer rule based on a maximum likelihood procedure. This note shows in the context of Schmeidler-Gilboa pr...
Updating with Incomplete Observations
Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete (or set-valued). This is a fundamental problem, and of particular interest for Bayesian networks. Recently, Grünwald and Halpern have shown that commonly used updating strategies fail here, except under very special assumptions. We propose a new rule for updat...
Updating Beliefs with Incomplete Observations
Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete (or set-valued). This is a fundamental problem in general, and of particular interest for Bayesian networks. Recently, Grünwald and Halpern have shown that commonly used updating strategies fail in this case, except under very special assumptions. In this pap...
Unpredictability, Probability Updating and the Three Prisoners Paradox
This paper discusses the Three Prisoners paradox in the light of three different procedures for updating probabilities (Bayesian conditioning, superconditioning, and Jeffrey's rule), as well as assuming the unpredictability of the receipt of information by prisoner A. The formulation of the paradox in this temporal setting brings new insight to the problem and, on the other hand, the paradox is ...
Journal: CoRR
Volume: abs/physics/0608185
Published: 2006